Patent abstract:
A method of detecting fraud when using a device for capturing a print of a body part using a total-reflection principle with a dark background and comprising a transparent plate on which the body part to be checked is placed. The method comprises: using said device to acquire (61) a first print image with an illumination of the body part such that the entire surface of the body part in contact with the transparent plate returns light; using said device to acquire (63) at least a second print image by illuminating the body part with a single LED; obtaining (67), for each second print image, information representative of a level of light re-emitted by the body part using each image obtained; and comparing (68) each piece of information obtained with reference information to validate that the body part is a real body part.
Publication number: FR3065306A1
Application number: FR1753180
Filing date: 2017-04-12
Publication date: 2018-10-19
Inventors: Joel-Yann Fourre; Jean Beaudet
Applicant: Safran Identity and Security SAS
Main IPC class:
Patent description:

Holder(s): SAFRAN IDENTITY AND SECURITY Société anonyme.
Extension request(s)
Agent(s): LE GUEN & ASSOCIES Société civile professionnelle.
FR 3 065 306 - A1 (54) METHOD FOR DETECTING FRAUD.
(57) Method for detecting fraud when using a device for capturing a print of a body part using a principle of total reflection with a dark background and comprising a transparent plate on which the body part to be checked is placed. The method comprises:
using said device to acquire (61) a first print image with an illumination of the body part such that the entire surface of the body part in contact with the transparent plate returns light;
using said device to acquire (63) at least a second print image by illuminating the body part with a single LED;
obtaining (67), for each second print image, information representative of a level of light re-emitted by the body part using each image obtained; and comparing (68) each piece of information obtained with reference information to validate that the body part is a real body part.

The invention relates to a method for detecting fraud when using a device for capturing an imprint of a body part using a principle of total reflection with a dark background.
Context of the invention
The use of prints, for example a fingerprint, the prints of several fingers, or the print of a palm, makes it possible to secure access to buildings or machines. Using this technology increases security, since the probability that two people have two identical fingerprints is almost zero.
A print capture device captures an image of a print. In the case of identification, this print is compared with a set of reference prints contained in a database. In the case of authentication, this print is compared with a single print. The comparison makes it possible to determine whether the captured print belongs to a person referenced in the database, or whether the person is who they claim to be. Some malicious people attempt to identify (respectively authenticate) themselves fraudulently by using decoys in order to mislead identification (respectively authentication) devices.
Various methods are known for validating that the skin present in front of the print capture device is real, and therefore that the finger carrying the print is a real finger.
Some known methods are based entirely on image analysis, in particular on identifying the artefacts involved in a fraud. These methods are, however, not robust against carefully prepared frauds.
Other methods are also known which capture a series of images of the finger and measure, for example, sweating, pulse, oximetry, or the whitening of the finger as it presses on the capture surface.
However, such methods require an incompressible acquisition time, since this time is tied to the speed of evolution of the observed phenomenon, which degrades the ergonomics of the sensor.
The document FR3015728 describes a method making it possible to validate that a body element, such as the lower surface of one or more fingers or the palm of a hand, is covered with real skin. More precisely, the principle consists in illuminating the surface of the element with a light source that directly illuminates only one well-defined zone of the element, called the illuminated zone, while keeping a zone without direct illumination, called the diffusion zone. An image of these two zones is then captured and analyzed to deduce whether said element is covered with real skin or with false skin. In one embodiment of this method, an analysis zone covering the illuminated zone and the diffusion zone is divided into several calculation zones. An average light intensity of the pixels is then calculated for each calculation zone, which makes it possible to obtain a curve of light intensity and a curve of the light intensity gradient as a function of the distance from the calculation zone to the limit of the illuminated zone, and to compare the characteristics of these curves with those extracted from reference curves.
In the previous method, only the light rays coming from the body element that result from diffusion inside the body element are used. The method would be improved if it were possible to easily obtain an image of the body element representing only light rays resulting from diffusion inside the body element.
It is desirable to overcome these drawbacks of the state of the art. It is in particular desirable that such a method can be implemented within a time acceptable to a user.
STATEMENT OF THE INVENTION
According to a first aspect of the invention, the invention relates to a method of detecting fraud during the use of a device for capturing a print of a body part using a principle of total reflection with a dark background, said device comprising a first transparent plate (120) comprising an upper face on which a body part to be checked is placed, a light source (121) comprising a plurality of LEDs lighting exclusively in the direction of the upper face, a light-opaque screen (122) located below the first transparent plate, a second transparent plate (123) located below the opaque screen and an image sensor (124) located below the second transparent plate (123), the two transparent plates, the opaque screen and the image sensor being parallel, and the opaque screen (122) comprising an array of holes intended to let light rays coming from the light source reach said image sensor (124), said light rays allowing the image sensor to generate an image of said print, characterized in that the method comprises:
using said device to acquire (61) a first print image with a lighting of the body part to be checked by the light source, called first type of lighting, such that the entire surface of the body part to be checked in contact with the transparent plate returns light;
using said device to acquire (63) at least a second print image with a lighting of the body part to be checked by the light source, called second type of lighting, such that at least one LED is lit and, when a plurality of LEDs is lit, said LEDs are separated by at least a predefined distance such that the sub-part of the finger D returning the light emitted by one LED is disjoint from any other sub-part of the finger D returning the light emitted by another LED;
obtaining (67), for each second print image, information representative of a level of light re-emitted by the body part to be checked, using said second print image and the first print image; and comparing (68) said information with reference information representative of a level of light re-emitted by a real body part placed on the transparent plate, to validate that the body part to be checked is a real body part.
In this way, fraud detection is improved.
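The claimed sequence of steps (61, 63, 67, 68) can be sketched as follows. This is a minimal illustration, not the patent's implementation: the capture API, the reduction of the "information" to a mean ridge intensity, and the tolerance-based comparison are all assumptions made for the example.

```python
# Illustrative sketch of the claimed fraud-detection sequence (steps 61-68).
# The sensor API, the scalar feature and the comparison rule are assumptions.

def acquire_first_image(sensor):
    """Step 61: full lighting, the whole contact surface returns light."""
    return sensor.capture(leds="all")          # hypothetical capture API

def acquire_second_images(sensor, chosen_leds):
    """Step 63: one image per lighting configuration with isolated LEDs."""
    return [sensor.capture(leds=[led]) for led in chosen_leds]

def light_level_info(first_image, second_image):
    """Step 67: information representative of the level of light re-emitted,
    here reduced to one scalar: the mean second-image intensity over pixels
    identified as ridges in the first image (threshold 128 is illustrative)."""
    ridge_pixels = [p2 for p1, p2 in zip(first_image, second_image) if p1 > 128]
    return sum(ridge_pixels) / len(ridge_pixels)

def is_real_body_part(infos, reference, tolerance):
    """Step 68: every piece of information must match the reference."""
    return all(abs(i - reference) <= tolerance for i in infos)
```

In practice the "information" would be the curves described below (intensity or scalar-product profiles as a function of distance to the illuminated zone), not a single scalar.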
According to one embodiment, the information representative of a level of light re-emitted by the body part to be checked comprises at least one curve representing a light intensity emitted by the body part and/or at least one curve of gradients of the light intensity emitted by the body part, as a function of a distance from the center of a zone directly illuminated by the light source when the latter illuminates the body part according to the second type of lighting, each curve being obtained by taking into account only the light intensities of pixels corresponding to ridges of the print of the body part, said pixels being identified using the first print image.
According to one embodiment, the information representative of a level of light re-emitted by the body part to be checked comprises at least one curve representing normalized scalar products calculated between the first and second print images and/or at least one curve representing gradients of normalized scalar products calculated between the first and second print images, as a function of a distance from the center of a zone directly illuminated by the light source when the latter illuminates the body part according to the second type of lighting.
According to one embodiment, the information representative of a level of light re-emitted by the body part to be checked further comprises an albedo measurement, said measurement being obtained from the first print image.
According to one embodiment, when a plurality of LEDs is lit simultaneously to obtain a second print image, the obtaining of the information representative of a level of light re-emitted by the body part to be checked and the comparison of said information with reference information are carried out for each sub-part, to validate that the body part to be checked is a real body part.
According to one embodiment, when a plurality of second print images is obtained, the second type of lighting is modified for each second print image, and the obtaining of the information representative of a level of light re-emitted by the body part to be checked and the comparison of said information with reference information are carried out for each second print image, to validate that the body part to be checked is a real body part.
According to one embodiment, the modification of the second type of lighting consists of varying the wavelength emitted by the lighting system, or the position of each lit LED of the lighting system, for each second print image.
According to one embodiment, prior to the acquisition of each second image, the method comprises choosing at least one LED to be lit for the acquisition of each second image according to a first predefined criterion using the first image.
According to one embodiment, the method further comprises locating the finger in the first image and choosing each LED to light according to the position of the finger thus located.
According to one embodiment, when a border separating two zones is detected in the first print image, a first second print image is acquired by illuminating the body part with a first LED generating a sub-part returning light on a first side of the border, and a second second print image is acquired by illuminating the body part with a second LED generating a sub-part returning light on a second side of the border, and the pieces of information representative of a level of light re-emitted by the body part to be checked obtained from each second image must be similar according to a second predefined criterion to validate that the body part to be checked is a real body part.
According to one embodiment, when a single LED is chosen, the chosen LED is the LED closest to the position of a barycenter of the body part in contact with the transparent plate, or closest to a center of curvature of the ridges of the print of the body part in contact with the transparent plate, or closest to a barycenter of minutiae detected in the ridges of said print.
According to one embodiment, the method further comprises: acquiring an additional print image without lighting by the light source; and subtracting the additional print image from the first and from each second image, such that the first and each second print image used when obtaining the information representative of a level of light re-emitted by the body part to be checked and when comparing said information with reference information are images resulting from this subtraction.
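The ambient-light correction of this embodiment amounts to a per-pixel subtraction. A minimal sketch follows; clipping negative values to zero is an illustrative choice, not something the text specifies.

```python
def subtract_ambient(image, ambient):
    """Subtract the additional print image acquired without any lighting by
    the light source (ambient light only) from a lit print image, pixel by
    pixel. Negative results are clipped to zero (illustrative choice)."""
    return [max(p - a, 0) for p, a in zip(image, ambient)]
```

Both the first image and each second image would be corrected this way before computing the re-emitted-light information.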
According to a second aspect of the invention, the invention relates to a device comprising means for implementing the method according to the first aspect.
According to a third aspect of the invention, the invention relates to equipment comprising a device according to the second aspect.
According to a fourth aspect, the invention relates to a computer program, comprising instructions for implementing, by a device, the method according to the first aspect, when said program is executed by a computing unit of said device.
According to a fifth aspect, the invention relates to storage means storing a computer program comprising instructions for implementing, by a device, the method according to the first aspect, when said program is executed by a computing unit of said device.
BRIEF DESCRIPTION OF THE DRAWINGS
The characteristics of the invention mentioned above, as well as others, will appear more clearly on reading the following description of an exemplary embodiment, said description being made in relation to the accompanying drawings, among which:
- Fig. 1 schematically illustrates equipment comprising a device for capturing a print of a body part according to the invention;
- Fig. 2 schematically illustrates a first embodiment of a device for capturing a print of a body part;
- Fig. 3 schematically illustrates, in a front view, a sub-part of a light-sensitive sensor adapted to the first embodiment of the device for capturing a print of a body part;
- Fig. 4A schematically illustrates a second embodiment of the device for capturing a print of a body part;
- Fig. 4B schematically illustrates an operation of the second embodiment of the device for capturing a print of a body part;
- Fig. 4C schematically illustrates, in a front view, a sub-part of a sensor adapted to the second embodiment of the device for capturing a print of a body part;
- Fig. 5 schematically describes a prior-art device for capturing a fingerprint working in total reflection with a dark background;
- Fig. 6 describes a method of detecting fraud according to the invention; and,
- Fig. 7 schematically illustrates an example of a hardware architecture of a processing module implementing the fraud detection method.
DETAILED DESCRIPTION OF VARIOUS EMBODIMENTS
The description which follows details more particularly embodiments of the present invention in the context of a smartphone. The invention can be applied to other equipment which may comprise a device for capturing a print of a body part, such as a computer, a tablet, an entry/exit control device for a building, etc. Furthermore, the invention is described in a context where the body part is a finger. However, it applies to other body parts such as several fingers, a palm, etc.
Fig. 1 schematically illustrates equipment comprising a device for capturing an imprint of a body part according to the invention.
The equipment 1 is here a smartphone comprising a screen 10, a processing module 11 and a device 12 for capturing a print of a body part. The device for capturing a print of a body part is hereinafter called the biometric device. The processing module 11 can implement several functionalities of the equipment 1, including in particular the processing of the data coming from the biometric device 12, as described below in relation to FIG. 6. The biometric device 12 is for example used by an owner of the equipment 1 to authenticate himself with the equipment 1 and thus be able to use it.
In a particular implementation, the device for capturing an imprint of a body part 12 is integrated into the screen 10.
Fig. 2 schematically illustrates a first embodiment of the biometric device 12 suitable for implementing the method according to the invention.
The biometric device 12 uses the principle of total reflection with a dark background which we recall below in relation to FIG. 5.
Fig. 5 schematically describes a device for capturing a fingerprint working in total reflection and operating in a dark background.
The device 50 described in FIG. 5 includes a prism 500, a light source 501B, and an optical system 502 comprising, for example, a CCD (Charge-Coupled Device) or CMOS (Complementary Metal-Oxide-Semiconductor) sensor and one or more lenses.
The light source 501B generates a light beam which passes through a first face 500B of the prism 500 toward a second face 500C of the prism 500 on which a finger D is positioned. The light beam generated by the light source 501B forms an incident angle α_B with the normal to the face 500C that is less than a critical angle θ_c and a limit angle θ_l (here, the incident angle α_B is zero degrees). The critical angle θ_c (resp. the limit angle θ_l) is defined as the angle beyond which total reflection occurs when a beam reaches the face 500C and the second medium is air (resp. the finger D). The light beam generated by the source 501B is therefore not totally reflected by the face 500C.
The optical system 502 receives the light beam generated by the source 501B after diffusion by the finger D. The optical system 502 is configured so as to receive light beams that, after diffusion in the finger D, form an angle between the critical angle θ_c and the limit angle θ_l with the normal to the face 500C. The optical system 502 therefore receives only light beams resulting from diffusion in the finger D, and no light beam resulting from a reflection on the upper face 500C. The optical system 502 thus forms an image of the print with a high contrast between the valleys and the ridges of the print. The ridges correspond to light beams that were scattered and partially absorbed in the finger D and exited the finger at the level of the ridges in contact with the face 500C to reach the optical system 502. No light beam scattered in the finger D and emerging from the finger D at the level of the valleys can reach the optical system 502, because such a beam cannot cross the air layer and then propagate in the prism 500 while forming an angle greater than the critical angle θ_c with respect to the normal to the face 500C. The ridges therefore appear brighter than the valleys in the print image. A similar device can be found in French patent FR2757974.
The critical angle θ_c is given by the following formula:

θ_c = arcsin(n_0 / n_1)

n_1 being the refractive index of the prism and n_0 being the refractive index of air or of the finger. For an air refractive index equal to "1" and a prism refractive index equal to "1.5", a critical angle θ_c = 41.8 degrees is obtained. The refractive index of the skin is, in the visible range, between "1.41" and "1.47". By considering the minimum value of "1.41", a limit angle θ_l of "70" degrees is obtained. By considering the maximum value, a maximum limit angle θ_l of approximately "78.5" degrees is obtained.
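The angles quoted above follow directly from the Snell-Descartes relation sin θ = n_0 / n_1. A short sketch reproducing them (the function name is illustrative):

```python
import math

def limit_angle_deg(n_outside, n_plate):
    """Angle of incidence (degrees, measured from the normal) beyond which
    total reflection occurs at an interface from a medium of index n_plate
    toward a medium of index n_outside (Snell-Descartes law)."""
    return math.degrees(math.asin(n_outside / n_plate))

theta_c = limit_angle_deg(1.0, 1.5)    # air: about 41.8 degrees
theta_l = limit_angle_deg(1.41, 1.5)   # skin, minimum index: about 70 degrees
```

With the maximum skin index of "1.47", the same function gives roughly 78.5 degrees.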
Returning to FIG. 2, the biometric device 12 comprises a plurality of optically coupled elements comprising:
• a first transparent plate 120 of thickness E_120 comprising an upper face 1200 on which the body part carrying the print to be imaged (here the finger D) can rest;
• a light source 121 adapted to illuminate the finger D, here composed of LEDs. Four LEDs 121A, 121B, 121C and 121D are shown in FIG. 2;
• a light-opaque screen 122 located below the first transparent plate 120;
• a second transparent plate 123 of thickness E_123 located below the opaque screen 122;
• a sensor 124 comprising photoreceptors sensitive to light, located below the second transparent plate 123.
The transparent plates 120 and 123 have a refractive index greater than a predefined minimum refractive index n_min, itself greater than the refractive index of air.
In a particular implementation, this minimum refractive index n_min is greater than the refractive index of the finger, more particularly greater than "1.47". It is in fact known that when the refractive index of at least the second plate 123 is greater than the refractive index of the finger, the image of the finger is of finite extent.
In the remainder of the description, for simplicity, it is assumed that the two transparent plates 120 and 123 have an identical refractive index, for example equal to "1.5".
By optically coupled, it should be understood that a ray going from the upper face of the first plate to the lower face of the second plate does not pass through any medium with an index lower than n_min. This can be achieved in the case of two glass plates, for example, by bonding the two plates with an adhesive of sufficient refractive index.
The faces of the transparent plates 120 and 123, the opaque screen 122 and the sensor 124 are parallel. Here, the opaque screen 122 and the sensor 124 are considered to be plates whose thickness is less than the thickness of the two transparent plates 120 and 123.
Each LED of the light source 121 is adapted to generate a light beam above the opaque screen 122 in the direction of the upper face 1200. Each LED is configured so that each light beam emanating from this LED has a small incident angle with respect to the normal to the upper face 1200, in all cases less than the critical angle θ_c. In this way, no light ray emanating from an LED of the light source 121 undergoes total reflection on the upper face 1200. With such a configuration of the light source 121, the entire surface of the finger D facing the face 1200 is not directly lit. Only zones of limited dimensions (i.e. almost point-like) and well defined, called illuminated zones, are directly lit by each LED. In order for the whole of the finger D facing the upper face 1200 to return light, the LEDs of the light source 121 are configured so that each LED and the LED(s) closest to said LED generate on the finger D directly illuminated parts separated by a distance δ less than a predefined distance d_p, characteristic of the depth of penetration of light into the finger D. The distance δ is the minimum distance between the borders of two lit parts. The predefined distance d_p varies from about a millimeter for blue light to a few centimeters for infrared light. The sensor 124 therefore receives light beams resulting from a diffusion by the finger D of the light rays produced by the light source 121. The biometric device 12 is therefore a device for capturing a print working in total reflection with a dark background.
The opaque screen 122 is a thin layer which can be produced, for example, by printing or by depositing an opaque coating on the plate 123. The opaque screen 122 is not, however, completely opaque, since it is composed of an array of holes. Each ray of light directed toward the sensor 124 which reaches the opaque screen 122 at the level of a hole crosses the opaque screen 122 and reaches the sensor 124.
In a particular implementation, the opaque screen 122 is a thin layer which can be produced by printing or by depositing an absorbent coating, such as a metallic deposit, on the upper face of the transparent plate 123 or on the lower face of the transparent plate 120. Each hole of the opaque screen 122 is filled with a material having a refractive index greater than the predefined minimum refractive index n_min.
The sensor 124 is for example a CCD sensor or a CMOS sensor composed of a matrix of photoreceptors (such as the photoreceptor 1241) sensitive to the wavelength (or wavelengths) of the light beams emitted by the light source 121. The sensor 124 is optically coupled to the plate 123. The sensor 124 receives light passing through the holes in the opaque screen 122 and generates, from the received light, information which is used by the processing module 11 to produce a print image. The print image thus produced is composed of a matrix of pixels, each pixel coming from one or more photoreceptors. To obtain a good contrast between the ridges and the valleys of the print, only the light rays coming from the finger D having an angle of incidence relative to the normal to the upper face 1200 comprised between the critical angle θ_c and the limit angle θ_l are considered.
In order to prevent light rays having an angle of incidence less than the critical angle θ_c from being taken into account in the print images generated by the processing module 11, the sensor 124 does not include photoreceptors sensitive to light at any position of the sensor which can be struck by a light ray coming from the finger D with an angle of incidence relative to the normal to the upper face 1200 less than the critical angle θ_c. In this way, only information coming from photoreceptors located at positions which can be struck by light rays coming from the finger D after diffusion into the finger D, with an angle of incidence relative to the normal to the upper face 1200 comprised between the critical angle θ_c and the limit angle θ_l, is used by the processing module 11 to form print images.
In a particular implementation, each photoreceptor of the sensor 124 corresponding to a position which can be struck by a light ray coming from the finger D after diffusion into the finger D with an angle of incidence relative to the normal to the upper face 1200 lower than the critical angle θ_c is masked by an opaque metallic layer, for example of aluminum. The photoreceptors located below the opaque metallic layer therefore become insensitive to light and thus cannot provide the processing module 11 with information corresponding to light rays coming from the finger D after diffusion into the finger D with an angle of incidence relative to the normal to the upper face 1200 less than the critical angle θ_c. It is known that a photoreceptor of a CCD or CMOS sensor struck by a beam of light risks disturbing the photoreceptors in its vicinity, especially when these photoreceptors are highly saturated (for example when the sensor 124 is directed toward the sun). An advantage of this particular implementation is that masking the photoreceptors which can be struck by a light ray coming from the finger D after diffusion into the finger D with an angle of incidence relative to the normal to the upper face 1200 less than the critical angle θ_c prevents these photoreceptors from disturbing their neighbors.
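The masking rule can be expressed geometrically: a photoreceptor stays sensitive only if it lies in the ring of positions reachable, through the nearest hole, by rays whose incidence at the upper face is between θ_c and θ_l. The sketch below uses a straight-ray model with the numerical values given further on (hole diameter 7 µm, E_123 = 60 µm); it is an illustration, not the patent's masking procedure.

```python
import math

def is_sensitive(r_um, hole_diam_um=7.0, e123_um=60.0,
                 theta_c_deg=41.8, theta_l_deg=70.0):
    """Whether a photoreceptor at distance r_um (micrometers) from the
    center of the nearest hole lies in the peripheral ring, i.e. can only
    be struck by rays whose incidence at the upper face is between
    theta_c and theta_l. Straight-ray geometric model (illustrative)."""
    r_inner = hole_diam_um / 2 + e123_um * math.tan(math.radians(theta_c_deg))
    r_outer = hole_diam_um / 2 + e123_um * math.tan(math.radians(theta_l_deg))
    return r_inner <= r_um <= r_outer
```

With these values the ring spans roughly 57 µm to 168 µm from the hole center, matching the disc and ring diameters of about 114 µm and 337 µm quoted later.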
To prevent the incidence zones from overlapping, the holes in the opaque screen 122 are arranged so that the distance L between a hole and the hole or holes which are its closest neighbors, taken from center to center, is greater than the diameter of the image of the finger D projected on the sensor 124 as seen through one hole when the finger D is placed on the upper face 1200. If d_T is the diameter of a hole, the diameter d_AP of the projection of the finger D on the sensor 124 seen through a hole is given by:
d_AP = d_T + 2 · E_123 · tan(θ_l)

and therefore:

L ≥ d_AP
In a particular implementation, the holes of the opaque screen 122 are spaced from each other by a distance L ≥ d_AP and, provided that the constraint on the distance L is respected, placed in an arbitrary manner on the opaque screen 122.
In a particular implementation, the holes of the opaque screen 122 are spaced from each other by a distance L ≥ d_AP and placed regularly, for example in the form of a rectangular matrix or a hexagonal mesh, on the opaque screen 122.
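The spacing constraint can be checked numerically with the formula above (helper name is illustrative):

```python
import math

def projection_diameter_um(d_t_um, e123_um, theta_l_deg):
    """d_AP = d_T + 2 * E_123 * tan(theta_l): diameter, on the sensor, of
    the image of the finger seen through one hole of the opaque screen."""
    return d_t_um + 2 * e123_um * math.tan(math.radians(theta_l_deg))

# With the values given later (d_T = 7 um, E_123 = 60 um, theta_l = 70 deg),
# d_AP is about 337 um, so the 400 um center-to-center spacing used there
# satisfies L >= d_AP.
d_ap = projection_diameter_um(7.0, 60.0, 70.0)
```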
In Fig. 2, the photoreceptors of sensor 124 shown in white (such as photoreceptor 1241) are photoreceptors sensitive to light. The photoreceptors of sensor 124 shown in black (such as photoreceptor 1242) are photoreceptors that are not sensitive to light.
Fig. 3 schematically illustrates, in a front view, a sub-part of the sensor 124 adapted to the first embodiment of the biometric device 12.
Here we consider the case where the holes of the opaque screen 122 are spaced from each other by a distance L ≥ d_AP and placed regularly in the form of a rectangular matrix of holes.
The sensor 124 consists of a matrix of square photoreceptors, generally from "1" to "10" µm on a side.
Superimposed on the sensor 124 is shown a series of incidence zones distributed regularly over the sensor 124. Each incidence zone comprises a central disc, such as the disc 1244, and a peripheral ring, such as the ring 1243, the central disc and the peripheral ring of an incidence zone being concentric. Each incidence zone corresponds to one of the holes in the opaque screen 122 and represents the image of the finger D projected on the sensor 124 as seen through said hole when the finger D is placed on the upper face 1200. For example, the incidence zone comprising the central disc 1244 and the peripheral ring 1243 corresponds to the hole 122A. The diameter of each peripheral ring therefore corresponds to the diameter d_AP of the image of the finger D projected on the sensor 124 as seen through one hole when the finger D is placed on the upper face 1200.
Since the holes in the opaque screen 122 take the form of a rectangular matrix of holes, the incidence zones follow this shape on the sensor 124. When the holes in the opaque screen 122 are circular, the center of the incidence zone corresponding to a hole and the center of said hole coincide. The part located in a peripheral ring (for example the peripheral ring 1243) corresponds to a zone receiving light rays having passed through the opaque screen 122 through a hole (here the hole 122A) and having an incident angle with the normal to the upper face 1200 comprised between the critical angle θ_c and the limit angle θ_l. The part located inside a central disc (for example the central disc 1244) corresponds to a zone receiving light rays having passed through the opaque screen 122 through a hole (here the hole 122A) and having an incident angle with the normal to the upper face 1200 less than the critical angle θ_c. Each part of the sensor 124 located inside a central disc therefore corresponds to a part from which information is not desired. The photoreceptors located in each of these parts must therefore be insensitive to light. Each part of the sensor 124 located in a peripheral ring corresponds to a part from which it is desired to recover information. The photoreceptors located in each of these parts must therefore be sensitive to light. The pixels situated outside a peripheral ring receive little, if any, light from the finger if the refractive index of the finger placed on the upper face 1200 is lower than the refractive index of the transparent plates 120 and 123.
It is noted that the distance L between the holes makes it possible to image at least once each point of the finger D facing the upper face 1200. The biometric device 12 having a known geometry, it is possible to determine which photoreceptor(s) of the sensor 124 image a given point of the finger D. It then becomes possible to reconstruct an image of the print of the finger D by known techniques.
The processing module 11 takes into account the information from each photoreceptor having imaged a point to generate a representation of this point in the print image. During this generation of a representation, the processing module 11 rebalances the information from the photoreceptors relative to each other, taking into account, for each photoreceptor, information representative of the distance between said photoreceptor and the point which was imaged. In a particular implementation, when the same point of the finger D is imaged by several photoreceptors of the sensor 124, following the rebalancing, the processing module 11 calculates an average of the information from each photoreceptor having imaged this point to generate a representation of this point in a print image.
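The mapping from a sensitive photoreceptor back to the finger point it images can be sketched, in one dimension, as the inversion of a pinhole projection through a hole center. This is a straight-ray model, justified here because both plates share the same index; the function and its 1-D simplification are illustrative, not the patent's reconstruction technique.

```python
def finger_point_um(x_sensor_um, x_hole_um, e120_um=300.0, e123_um=60.0):
    """Invert the pinhole projection through a hole of the opaque screen:
    a photoreceptor at lateral position x_sensor_um images the finger point
    x_f on the upper face, with
        x_f = x_hole - (E_120 / E_123) * (x_sensor - x_hole)
    (all positions in micrometers along one axis)."""
    return x_hole_um - (e120_um / e123_um) * (x_sensor_um - x_hole_um)
```

The E_120 / E_123 ratio of 5 is consistent with the magnification of -1/5 described below: a 10 µm offset on the sensor corresponds to a 50 µm offset, on the other side of the hole, on the finger.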
In a particular implementation, the transparent plates 120 and 123 are square glass plates of "4.4" mm on a side and, as seen above, with a refractive index equal to "1.5". The sensor 124 has a square shape of "3.9" mm on a side, comprising square photoreceptors of "4" µm on a side.
In a particular implementation, the transparent blade 123 has a thickness E123 three to ten times smaller than the thickness E120 of the transparent blade 120. For example, a thickness E123 = 60 μm and a thickness E120 = 300 μm make it possible, when the two blades have the same refractive index, to obtain a magnification of 1/5 (ie an image of an object on the sensor 124 is five times smaller than the real object placed on the upper face 1200 and, conversely, an area on the sensor 124 corresponds to an area 5 times larger on the upper face 1200). In this particular implementation, the transparent blade 123 is bonded to the sensor 124 or produced by a series of depositions on the sensor 124.
Fingerprint imaging standards recommend finger image resolutions greater than 500 or 1000 dots per inch (dpi). With a magnification of 1/5, if a finger image sampled at more than 500 dpi (resp. 1000 dpi) is desired, pixels of less than 10 μm (resp. less than 5 μm) are needed.
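The relation between sampling resolution, magnification and pixel pitch can be checked with a short computation (the function name is illustrative):

```python
def max_pixel_pitch_um(target_dpi, magnification):
    # Largest sensor pixel pitch (in µm) that still samples the finger
    # at target_dpi, for a given magnification (image size / object size).
    um_per_inch = 25400.0
    object_pitch_um = um_per_inch / target_dpi   # sampling step required on the finger
    return object_pitch_um * magnification       # corresponding step on the sensor
```

With a magnification of 1/5, this gives approximately 10.16 µm at 500 dpi and 5.08 µm at 1000 dpi, consistent with the 10 µm and 5 µm bounds above.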
In a particular implementation, the holes of the opaque screen 122 have a diameter of 7 μm and form a regular matrix of 10 × 10 holes in which the holes are spaced a distance L = 400 μm from center to center from each other, as shown in Fig. 3. With a critical angle θc = 41.8 degrees, a limit angle θl = 70 degrees, a hole diameter of 7 μm and a thickness of the blade 123 of 60 μm, each central disc has a diameter of approximately 114 μm and each peripheral ring has an outside diameter of approximately 337 μm.
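These diameters follow from the hole size, the blade thickness and the two angle bounds; a sketch of the computation, assuming an infinitely thin opaque screen:

```python
import math

def zone_diameters_um(hole_diameter_um, blade_thickness_um,
                      critical_angle_deg, limit_angle_deg):
    # Central disc: rays up to the critical angle spread over
    # hole + 2 * thickness * tan(critical angle) on the sensor.
    d_central = hole_diameter_um + 2 * blade_thickness_um * math.tan(
        math.radians(critical_angle_deg))
    # Outer edge of the peripheral ring: same construction with the limit angle.
    d_ring_outer = hole_diameter_um + 2 * blade_thickness_um * math.tan(
        math.radians(limit_angle_deg))
    return d_central, d_ring_outer
```

With the values above (7 µm holes, 60 µm blade, θc = 41.8°, θl = 70°), this yields approximately 114 µm and 337 µm.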
In a particular implementation, the underside of the opaque screen 122 and the edges of the holes are made absorbent by applying known techniques (application of a layer of black chrome, of a layer of ink, texturing of the underside, etc.) in order to minimize reflections between the opaque screen 122 and the sensor 124.
In a particular implementation, the LEDs, which are shown in FIG. 2 apart from the blades 120 and 123 for clarity, are either integrated on a lower face of the blade 120 (ie the face of the blade 120 which is in contact with the opaque screen 122), or integrated on the upper face of the blade 123 (ie the face of the blade 123 comprising the opaque screen 122).
In a particular implementation, the LEDs are gallium nitride (GaN) LEDs or OLEDs.
Fig. 4A schematically illustrates a second embodiment of the biometric device 12 according to the invention.
In this embodiment, we find the blade 120, the opaque screen 122, the blade 123 and the sensor 124.
In this embodiment, the light source is no longer located at the level of the opaque screen 122. LEDs are inserted at the level of the sensor 124, ie under the opaque screen 122. At least part of the holes in the opaque screen 122 have an LED facing them.
So that the whole of the finger D facing the upper face 1200 diffuses light, the holes in the opaque screen 122 having an LED facing them are arranged so that each LED and the LED(s) which are its closest neighbors generate directly lit parts separated by a distance δ less than the distance dp.
Furthermore, in order to avoid overlaps between the incidence zones, in all the particular implementations relating to the embodiment of Fig. 4A, the holes in the opaque screen 122 used to image the finger D are arranged such that the minimum distance L between a hole and the hole or holes which are its closest neighbors, taken from center to center, is greater than the diameter of the image of the finger D seen by a hole when the finger D is placed on the upper face 1200, that is to say L > dAP.
Each LED can be produced by deposition on the sensor 124. In this case, each LED is integrated on the surface of the sensor 124.
In a particular implementation, each LED is integrated into the sensor 124.
In a particular implementation, each LED generates a beam of light directed towards the holes with a maximum angle of incidence θmax relative to a normal to the upper face 1200, making it possible to prevent these LEDs, after reflection on the opaque screen 122, from lighting up light-sensitive photoreceptors. In a particular implementation, θmax = 23 degrees.
Fig. 4C schematically illustrates, in front view, a sub-part of an example of a sensor 124 adapted to the second embodiment of the biometric device 12.
In the case of FIG. 4C, the holes in the opaque screen 122 form a rectangular matrix of holes.
In Fig. 4C, LEDs have been inserted at positions corresponding to the center of each central disc. These include the LEDs 121A and 121B shown in Fig. 4A, and the LEDs 121C, 121D, 121E and 121F. There is therefore an LED at each position of the sensor 124 receiving light rays having passed through the opaque screen 122 through a hole and having an angle of incidence with the normal to the upper face 1200 less than the critical angle θc. In this particular implementation, there is therefore an LED positioned at the level of the sensor 124 facing each hole of the opaque screen 122. Therefore, in the same way as the holes form a matrix of holes on the opaque screen 122, the LEDs form an LED matrix on the sensor 124. As in the first embodiment of the biometric device 12, the photoreceptors located in a central disc are non-sensitive to light.
With circular holes, each LED illuminates a disc facing it on the finger D when the latter is placed on the upper face 1200. When, for example, the transparent blade 123 and the transparent blade 120 have thicknesses of respectively E123 = 60 μm and E120 = 300 μm, the holes have a diameter of 7 μm, and each LED is circular with a diameter of 10 μm, each LED illuminates a disc of approximately 92 μm. With an opaque screen comprising regularly distributed holes with a distance of 400 μm between the centers of the holes, the entire surface of the finger D placed on the upper face 1200 is not illuminated by the LEDs. But, as seen above, a finger being a diffusing medium, the entire surface of the finger facing the upper face 1200 will return light.
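The 92 µm figure can be recovered by an edge-ray construction (a sketch, assuming a thin opaque screen and an LED centred under its hole):

```python
def illuminated_disc_diameter_um(led_diameter_um, hole_diameter_um,
                                 e123_um, e120_um):
    # Extreme ray: from one edge of the LED through the opposite edge
    # of the hole, then on to the upper face 1200.
    led_r = led_diameter_um / 2.0
    hole_r = hole_diameter_um / 2.0
    slope = (led_r + hole_r) / e123_um        # ray divergence below the screen
    top_r = hole_r + slope * e120_um          # radius reached on the upper face
    return 2.0 * top_r
```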
In this configuration, if an incidence zone is projected onto the upper face 1200 through the hole in the opaque screen 122 corresponding to said incidence zone, a projection of the central disc with a diameter approximately equal to 544 μm and a projection of the peripheral ring with an outside diameter approximately equal to 1656 μm are obtained. With regularly distributed holes with a distance of 400 μm between the centers of the holes, the projections of the peripheral rings overlap.
Fig. 4B schematically illustrates an operation of the second embodiment of the biometric device 12.
In Fig. 4B, the device is equipped with the sensor of Fig. 4C.
In Fig. 4B, the projections of 8 incidence zones on the upper face 1200 have been represented. A point P, also represented in Fig. 4A, appears in the projection of three different peripheral rings. This point P is therefore imaged three times on the sensor 124: a first time at a point P1 along a radius RP1, a second time at a point P2 along a radius RP2 and a third time at a point (not shown) along a radius RP3. Imaging each point of the finger D several times provides better image quality.
Each LED in the light source can be lit independently of the other LEDs. For example, it is possible to light a single LED of the light source or all the LEDs of the light source.
Fig. 6 describes a fraud detection method according to the invention.
The method described in relation to Fig. 6 is implemented by the processing module 11, for example following an identification or authentication of a person. The biometric device 12 used can be, indifferently, the biometric device 12 according to the first embodiment described in relation to Fig. 2 or according to the second embodiment described in relation to Fig. 4A.
In a step 61, the processing module 11 causes an acquisition of a first impression image by the biometric device 12 with illumination of the finger D by the light source 121 such that the entire surface of the finger D in contact with the upper face 1200 returns light. As seen above, among the light rays scattered by the finger D following the lighting by the LEDs of the light source 121, only the light rays having an angle of incidence relative to the normal to the upper face 1200 between the critical angle θc and the limit angle θl are used to create the first impression image. Thus, the first image obtained is an image in which the pixels representing the ridges have a high light intensity and the pixels representing the valleys have a low light intensity compared to the pixels of the ridges. The contrast between the ridges and the valleys is therefore significant.
In a step 63, the processing module 11 causes an acquisition of at least a second imprint image with lighting of the body part by the light source 121 using an LED or a plurality of LEDs. When a plurality of LEDs is used, said LEDs are separated by at least a predefined distance. The predefined distance is such that the sub-part of the finger D returning the light emitted by an LED is disjoint from any other sub-part of the finger D returning the light emitted by another LED. When the LED is circular, a sub-part of the finger D returning light is defined as a circular zone of the finger D having a maximum light intensity at its center and whose light intensity decreases radially down to a minimum light intensity, the minimum light intensity being a predefined percentage (for example 1%) of the maximum light intensity. The minimum light intensity can also be defined as a light intensity which is not detectable by the sensor 124. The minimum light intensity therefore defines the limit of the sub-part of the finger D returning light. In a particular implementation, the processing module causes the lighting of a single predefined LED of the light source 121 to allow acquisition of a single second image. Thus, as in the document FR3015728, only a well-defined and very reduced area is illuminated directly on the part of the finger D facing the upper face 1200. The second image is the most informative image for determining whether the finger D is covered with real skin.
In a step 67, the processing module 11 obtains information representative of a level of light re-emitted by the finger D.
In a particular implementation, a light intensity profile is obtained for each sub-part of the finger D returning light. To do this, each sub-part of the finger D returning light is divided into several calculation zones. When the LED is circular, a first central calculation zone has the shape of a disc and the other calculation zones are concentric rings surrounding the central calculation zone. An average light intensity of the pixels is then calculated for each calculation zone. The average light intensity of a zone is calculated from the pixels of said zone representing ridges of the imprint. To identify the pixels corresponding to ridges, the processing module uses the first image. In the first image, each pixel corresponding to a ridge has a high light intensity value compared to a pixel corresponding to a valley. In a particular implementation, the light intensity value of each pixel of the first image is compared with a predefined threshold. On the basis of this comparison, each pixel of the first image is classified either in a first category corresponding to a ridge, or in a second category not corresponding to a ridge. Pixels with a light intensity value above the predefined threshold are classified in the first category. Pixels whose light intensity value is less than or equal to the predefined threshold are classified in the second category. An advantage of the biometric device 12 is that, since it uses the principle of total reflection with a dark background, the first image that it generates comprises fewer pixels corresponding to defects on the upper face 1200 which could be interpreted by the processing module 11 as ridge pixels.
During the calculation of each average light intensity of a calculation zone, the processing module 11 checks, for each pixel of the second image corresponding to said calculation zone, whether the corresponding pixel in the first image has been classified in the ridge pixel category. If so, this pixel is taken into account in the calculation of the average. Otherwise, this pixel is not taken into account in the calculation of the average. From the average light intensity calculated for each calculation zone, the processing module 11 obtains a light intensity curve and a light intensity gradient curve as a function of the distance between the calculation zone and the center of the central calculation zone. The center of the central calculation zone corresponds to the projection of the center of the LED corresponding to this calculation zone onto the upper face 1200.
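The per-zone averaging restricted to ridge pixels can be sketched as follows. Names and the fixed ring width are illustrative; in the device, the zone layout follows the LED projection geometry:

```python
import numpy as np

def ring_profile(second_img, ridge_mask, center, ring_width=5.0, n_rings=8):
    # Mean intensity of ridge pixels in a central disc and concentric rings
    # around the projection of the LED center; ridge_mask comes from
    # thresholding the first image.
    h, w = second_img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot(yy - center[0], xx - center[1])
    profile = []
    for k in range(n_rings):
        in_zone = (dist >= k * ring_width) & (dist < (k + 1) * ring_width)
        sel = in_zone & ridge_mask
        profile.append(float(second_img[sel].mean()) if sel.any() else 0.0)
    return profile
```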
Each curve obtained constitutes information representative of the level of light re-emitted by finger D.
In a step 68, the processing module 11 compares the information representative of the level of light re-emitted by the finger D with reference information representative of a level of light re-emitted by a real finger placed on the upper face 1200 to validate that the finger D placed on the upper face 1200 is a real finger. For the finger D to be validated, the light intensity curve (respectively the gradient curve) as a function of the distance must remain between two limit light intensity curves (respectively two limit light intensity gradient curves) estimated from reference curves. It is also possible to measure, for each point of a predefined plurality of points of the light intensity curve (respectively of the light intensity gradient curve), the difference between the light intensity value (respectively the gradient value) of said point and the light intensity value of a corresponding point (ie located at the same distance from the illuminated zone) on the two limit light intensity curves (respectively the two limit light intensity gradient curves), to sum these differences in absolute value and to compare this sum with a predefined limit threshold.
The reference curves (ie the limit light intensity curves and the limit light intensity gradient curves) are here light intensity curves and light intensity gradient curves which have been established from a large panel of real fingers using the biometric device 12.
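The two acceptance tests described above can be sketched as follows. The names are illustrative, and the deviation variant (summing only the excursions outside the corridor) is one possible reading of the text:

```python
def within_limits(curve, lower, upper):
    # Corridor test: the measured curve must stay between the two limit curves.
    return all(lo <= v <= up for v, lo, up in zip(curve, lower, upper))

def deviation_sum(curve, lower, upper):
    # Alternative test: sum of the absolute excursions outside the corridor,
    # to be compared with a predefined limit threshold.
    total = 0.0
    for v, lo, up in zip(curve, lower, upper):
        if v < lo:
            total += lo - v
        elif v > up:
            total += v - up
    return total
```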
The light intensity of the pixels of the first image is a value representative of a light coupling efficiency between the sensor 124 and the finger D. In a particular implementation of step 67, the processing module 11 uses this information when reconstructing the level of light re-emitted by the finger D. To do this, the processing module 11 calculates, for each calculation zone of the second image, a scalar product between said calculation zone and the spatially corresponding zone in the first image (ie an identical ring located at the same position in the first image). For each zone, this scalar product is divided, for example, by the squared norm of the first image (ie by the scalar product of the first image by itself), so as to obtain for each zone a normalized scalar product. From the normalized scalar products calculated for each calculation zone, the processing module 11 obtains a scalar product curve and a scalar product gradient curve as a function of the distance between the calculation zone and the illuminated zone. Each curve obtained in this particular implementation constitutes the information representative of the level of light re-emitted by the finger D.
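The per-zone normalized scalar product can be sketched as follows (the normalizer is passed explicitly since, as the text notes, other normalizations are possible):

```python
import numpy as np

def normalized_dot(zone_second, zone_first, norm_ref):
    # Scalar product between a calculation zone of the second image and the
    # spatially corresponding zone of the first image, divided by the squared
    # norm of a reference (for example the first image itself).
    num = float(np.dot(zone_second.ravel(), zone_first.ravel()))
    den = float(np.dot(norm_ref.ravel(), norm_ref.ravel()))
    return num / den
```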
In this particular implementation, during step 68, the processing module 11 uses the curves thus obtained to validate or not the finger D. For the finger D to be validated, the scalar product curve (respectively the scalar product gradient curve) as a function of the distance must remain between two limit scalar product curves (respectively two limit scalar product gradient curves) estimated, there again, from reference scalar product curves. Again, it is also possible to measure, for each point of a predefined plurality of points of the scalar product curve (respectively of the scalar product gradient curve), the difference between the scalar product value (respectively the scalar product gradient value) of said point and the value of a corresponding point on the two limit scalar product curves (respectively the two limit scalar product gradient curves), to sum these differences in absolute value and to compare this sum with a predefined limit threshold.
The reference curves here are scalar product curves and scalar product gradient curves which have been established from a large panel of real fingers using the biometric device 12.
It is noted that in this particular implementation, other normalizations are possible.
Furthermore, in this particular implementation, it is possible to use an albedo measurement in addition to the calculated normalized scalar product values. This albedo measurement can be calculated on the first image as an average or median light intensity value calculated over all the pixels of said image or only over the pixels corresponding to the ridges in the first image. The albedo measurement obtained is compared with reference albedo measurements obtained from a large panel of real fingers using the biometric device 12. This albedo measurement is for example used to confirm or invalidate conclusions made on the validity of a finger from the scalar product curves and/or the scalar product gradient curves.
In a particular implementation of step 63, a plurality of LEDs is lit. The LEDs of said plurality are chosen so as to obtain several sub-parts of the finger D returning light, each sub-part being disjoint from the others. In this particular implementation, for each sub-part, a pair of curves is obtained comprising a light intensity curve (respectively a scalar product curve) and a light intensity gradient curve (respectively a scalar product gradient curve). Step 68 is applied to each pair of curves. In a particular implementation, if during at least one application of step 68 it appears that the finger D is not a real finger, then the processing module decides that it is in the presence of a fake finger and therefore of fraud.
In a particular implementation, during step 63, a plurality of second images is obtained by successively illuminating the finger D with at least one LED emitting at a different wavelength for each second image. For example, a first wavelength is below 600 nm and a second wavelength is above 600 nm. For each wavelength, the processing module 11 implements steps 66, 67 and 68. If, during at least one application of step 68, it appears that the finger D is not a real finger, then the processing module decides that it is in the presence of a fake finger and therefore of fraud. For each wavelength, the processing module uses in step 68 limit curves (ie limit light intensity curves, limit light intensity gradient curves, limit scalar product curves and limit scalar product gradient curves) adapted to each wavelength.
In a particular implementation, during a step 62 comprised between steps 61 and 63, the processing module 11 chooses at least one LED to light for the acquisition of each second image according to a predefined criterion. For example, when a single second image is acquired using a single LED, the lit LED is the LED closest to the position of a barycenter of the part of the finger D in contact with the upper face 1200, or the LED closest to the position of the center of curvature, also called the core, of the ridges of the part of the finger D in contact with the upper face 1200, as defined in the document EP3073416.
To facilitate the choice of one or more LEDs to light up, the processing module 11 can locate the finger in the first image and choose each LED to light up as a function of the position of the finger thus located.
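The barycenter criterion of step 62 can be sketched as follows (hypothetical names; the contact pixels would come from segmenting the finger in the first image):

```python
def nearest_led(led_positions, contact_pixels):
    # Barycenter of the finger region in contact with the upper face,
    # then index of the LED whose position is closest to it.
    cy = sum(y for y, x in contact_pixels) / len(contact_pixels)
    cx = sum(x for y, x in contact_pixels) / len(contact_pixels)
    return min(range(len(led_positions)),
               key=lambda i: (led_positions[i][0] - cy) ** 2
                           + (led_positions[i][1] - cx) ** 2)
```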
During a comparison of fingerprints, for example in an authentication process implemented by an authentication device, two images are compared: an image acquired by the authentication device, called an acquired image, and an image a reference imprint stored in a database, called the reference image.
During the comparison, regions of the imprint of the acquired image are matched with regions of the imprint of the reference image. We then compare the matched regions to perform authentication.
In some methods, to reduce the complexity of the comparison, particular points called minutiae are used, corresponding to bifurcations and endings of the visible lines of an imprint. Each minutia of the acquired image is compared with a corresponding minutia of the reference image. At the end of each comparison, a similarity score is calculated for each minutia, and an overall similarity score is calculated for the entire fingerprint.
The regions and minutiae matched are representative of the most important information in an imprint. It is therefore of particular interest to look for possible fraud in the vicinity of these regions or minutiae.
In a particular implementation, the processing module 11 focuses on the minutiae detected in the ridges of the first image to determine if the finger D is covered with real skin. During step 62, the processing module 11 chooses an LED making it possible to illuminate a point of the second image corresponding spatially to a barycenter of minutiae of the first image positively associated with minutiae of the reference image, or corresponding spatially to the barycenter of a region of predefined shape of the first image comprising the highest density of minutiae positively associated with minutiae of the reference image. The region of predefined shape can for example be a circle. This particular implementation makes it possible to counter frauds consisting in juxtaposing the use of a finger covered with a fake skin for authentication and a real finger for validation of the finger.
Some frauds involve partially covering a finger with fake skin. In this case, two areas appear in the footprint image, these two areas being separated by a border. This border may look like a scar. It is therefore relevant to check that the light scattering properties in the finger are identical on each side of the border. Indeed, different diffusion properties would indicate the presence of fraud. In a particular implementation, following the acquisition of the first image, the processing module 11 performs an analysis of this first image in order to detect a possible border there. If a border is detected, the processing module 11 chooses during step 62 a first LED illuminating on a first side of the detected border and a second LED illuminating on a second side of the detected border. In step 63, two second images are acquired by successively lighting the first and second LEDs chosen. The following steps of the method described in relation to FIG. 6 are applied independently on each second image. The finger is validated if, for each second image, during step 68, the processing module 11 declares the finger valid and if the information representative of the level of light re-emitted by the finger D obtained for the first second image is similar according to a predefined criterion to the information representative of the level of light re-emitted by the finger D obtained for the second second image.
In a particular implementation, the method described in relation to FIG. 6 comprises two intermediate steps 64 and 65 between step 63 and step 67. During step 64, the processing module 11 triggers an acquisition of an additional image without lighting by the light source 121.
During step 65, the additional image is subtracted from the first image and from each second image so that, in the rest of the method (ie steps 67 to 68), it is the first and second images resulting from this subtraction which are used in place of the first and second original images.
This particular implementation improves the performance of the process in the event of illumination by at least one external light source.
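The ambient-light correction of steps 64 and 65 amounts to a per-pixel subtraction (a sketch; the clipping at zero is an assumption, added here to avoid negative intensities):

```python
import numpy as np

def remove_ambient(lit_image, ambient_image):
    # Subtract the image acquired without lighting from a lit image,
    # clipping at zero so external illumination does not bias later steps.
    diff = lit_image.astype(np.float64) - ambient_image.astype(np.float64)
    return np.clip(diff, 0.0, None)
```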
Fig. 7 schematically illustrates an example of hardware architecture of the processing module 11.
According to the example of hardware architecture shown in FIG. 7, the processing module 11 then comprises, connected by a communication bus 110: a processor or CPU (“Central Processing Unit” in English) 111; a random access memory RAM (“Random Access Memory” in English) 112; a read only memory (ROM) 113; a storage unit such as a hard disk or a storage medium reader, such as an SD (“Secure Digital”) card reader 114; at least one communication interface 115 allowing the processing module 11 to communicate with the biometric device 12.
The processor 111 is capable of executing instructions loaded into the RAM 112 from the ROM 113, from an external memory (not shown), from a storage medium (such as an SD card), or from a communication network. When the processing module 11 is powered up, the processor 111 is capable of reading instructions from the RAM 112 and executing them. These instructions form a computer program causing the implementation, by the processor 111, of the method described in relation to Fig. 6.
The method described in relation to Fig. 6 can be implemented in software form by the execution of a set of instructions by a programmable machine, for example a DSP ("Digital Signal Processor"), a microcontroller or a GPU ("Graphics Processing Unit"), or be implemented in hardware form by a dedicated machine or component, for example an FPGA ("Field-Programmable Gate Array") or an ASIC ("Application-Specific Integrated Circuit").
It is noted that the processing module 11 could just as easily have been included in the biometric device 12.
Furthermore, the implementations described independently above for clarity can be combined.
Claims (15)
1) Method for detecting fraud when using a device for capturing an imprint of a body part using a principle of total reflection with a dark background, said device comprising a first transparent blade (120) comprising an upper face (1200) on which a body part to be checked is placed, a light source (121) comprising a plurality of LEDs lighting exclusively in the direction of the upper face, a screen opaque to light situated below the first transparent blade, a second transparent blade located below the opaque screen and an image sensor (124) located below the second transparent blade (123), the two transparent blades, the opaque screen and the image sensor being parallel and the opaque screen (122) comprising a network of holes intended to let light rays coming from the light source reach said image sensor (124), said light rays allowing the image sensor to generate an image of said imprint, characterized in that the method comprises:
using said device to acquire (61) a first impression image with lighting of the body part to be checked by the light source, said first type of lighting, such that the entire surface of the body part to be checked in contact with the transparent blade returns light;
using said device to acquire (63) at least a second imprint image with lighting of the body part to be checked by the light source, said second type of lighting, such that at least one LED is lit and, when a plurality of LEDs is lit, said LEDs are separated by at least a predefined distance such that the sub-part of the finger D returning the light emitted by an LED is disjoint from any other sub-part of the finger D returning the light emitted by another LED;
obtaining (67), for each second imprint image, information representative of a level of light re-emitted by the body part to be verified by using said second imprint image and the first imprint image; and, comparing (68) said information with reference information representative of a level of light re-emitted by a real body part placed on the transparent blade to validate that the body part to be checked is a real body part.
2) Method according to claim 1, characterized in that the information representative of a level of light re-emitted by the body part to be checked comprises at least one curve representing a light intensity emitted by the body part and/or at least one curve of gradients of light intensity emitted by the body part as a function of a distance from a center of a zone directly illuminated by the light source (121) when the latter illuminates the body part according to the second type of lighting, each curve being obtained by taking into account only light intensities of pixels corresponding to ridges of the imprint of the body part, said pixels being identified using the first imprint image.
3) Method according to claim 1, characterized in that the information representative of a level of light re-emitted by the body part to be checked comprises at least one curve representing normalized scalar products calculated between the first and the second imprint images and / or at least one curve representing gradients of normalized scalar products calculated between the first and second imprint images, as a function of a distance from a center of an area directly illuminated by the light source (121) when that -this illuminates the body part according to the second type of lighting.
4) Method according to claim 3, characterized in that the information representative of a level of light re-emitted by the body part to be verified further comprises an albedo measurement, said measurement being obtained from the first imprint image.
5) Method according to claim 2, 3 or 4, characterized in that when a plurality of LEDs is lit simultaneously to obtain a second imprint image, obtaining information representative of a level of light re-emitted by the body part to be checked and the comparison of said information with reference information are carried out for each sub-part to validate that the body part to be checked is a real body part.
6) Method according to any one of the preceding claims, characterized in that, when a plurality of second imprint images is obtained, the second type of lighting is modified for each second imprint image and obtaining the information representative of a level of light re-emitted by the body part to be checked and the comparison of said information with reference information are carried out for each second fingerprint image to validate that the body part to be checked is a real body part .
7) Method according to claim 6, characterized in that the modification of the second type of lighting consists in varying the wavelength emitted by the lighting system or the position of each lit LED of the lighting system for each second fingerprint image.
8) Method according to any one of the preceding claims, characterized in that, prior to the acquisition of each second image, the method comprises choosing at least one LED to light for the acquisition of each second image according to a first predefined criterion using the first image.
9) Method according to claim 8, characterized in that, the method further comprises locating the finger in the first image and choosing each LED to be lit according to the position of the finger thus located.
10) Method according to claim 8 or 9, characterized in that, when a border separating two zones is detected in the first impression image, a first second impression image is acquired by illuminating the body part with a first LED generating a sub-part returning light on a first side of the border and a second second impression image is acquired by illuminating the body part with a second LED generating a sub-part returning light on a second side of the border, and the information representative of a level of light re-emitted by the body part to be verified obtained from each second image must be similar according to a second predefined criterion to validate that the body part to be verified is a real body part.
11) Method according to claim 10, characterized in that, when a single LED is chosen, the LED chosen is the LED closest to a position of a barycenter of the body part in contact with the transparent blade, or the closest to a center of curvature of the ridges of the imprint of the body part in contact with the transparent blade, or the closest to a barycenter of minutiae detected in the ridges of said imprint.
12) Method according to any one of the preceding claims, characterized in that the method further comprises:
acquiring an additional imprint image without lighting by the light source;
subtracting the additional imprint image from the first and from every second imprint image, so that the first and every second imprint image used when obtaining the information representative of a level of light re-emitted by the body part to be verified and when comparing said information with reference information are images resulting from this subtraction.
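The subtraction of claim 12 amounts to ambient-light removal. A minimal sketch, assuming floating-point images; the clipping of negative values to zero is an assumption of the example, not stated in the claim:

```python
import numpy as np

def remove_ambient(first_image, second_images, ambient_image):
    """Subtract the unlit (ambient-only) capture from the first imprint
    image and from every second imprint image, clipping at zero so that
    sensor noise cannot produce negative pixel values (assumption).

    All inputs are 2D float arrays of the same shape.
    """
    corrected_first = np.clip(first_image - ambient_image, 0.0, None)
    corrected_seconds = [np.clip(img - ambient_image, 0.0, None)
                         for img in second_images]
    return corrected_first, corrected_seconds
```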
13) Device comprising means for implementing the method according to any one of claims 1 to 12.
14) Equipment comprising a device according to claim 13.
15) Computer program, characterized in that it includes instructions for implementing, by a device, the method according to any one of claims 1 to 12, when said program is executed by a calculation unit of said device.
Similar technologies:
Publication number | Publication date | Patent title
EP3388975A1|2018-10-17|Device for capturing an impression of a body part
US10282582B2|2019-05-07|Finger biometric sensor for generating three dimensional fingerprint ridge data and related methods
EP3388976A1|2018-10-17|Method for detecting fraud
EP3312771B1|2019-05-15|Device for acquiring fingerprints on the fly
EP2902943A1|2015-08-05|Method for validating the use of a real finger as a support for a fingerprint
WO2005006241A2|2005-01-20|Optical imagery device for the recognition of finger prints
EP0900427A1|1999-03-10|System for acquiring three-dimensional fingerprints and method of acquisition
EP2901370B1|2016-07-06|Method for detecting a real face
CA3000153A1|2018-09-30|Analysis process for a structure document capable of being deformed
EP3044729A1|2016-07-20|Method of validation intended to validate that an element is covered by a true skin
WO2016102854A1|2016-06-30|Method and system for acquiring and analysing fingerprints with fraud detection
EP3471016A1|2019-04-17|Method for detecting a presence of a body part carrying a fingerprint on a fingerprint sensor
EP3401837A1|2018-11-14|Device for capturing fingerprints
EP3073441B1|2019-07-10|Method for correcting an image of at least one object presented remotely to an imager and illuminated by an illumination system and camera system for carrying out said method
FR3021783A1|2015-12-04|METHOD FOR VALIDATING THE AUTHENTICITY OF A HUMAN BODY ELEMENT
FR3045884A1|2017-06-23|METHOD FOR DETECTING FRAUD TO AUTHENTICATE A FINGER
WO2017029455A1|2017-02-23|Biometric sensor with at least two light sources of different apparent sizes
EP3614305A1|2020-02-26|Authentication by optical index
EP3206160B1|2018-11-21|Method for biometric processing of images
KR20190121126A|2019-10-25|Method of detecting fraud
FR3078793A1|2019-09-13|METHOD OF AUTHENTICATING A FACE BASED ON REFLECTANCE
FR3024252A1|2016-01-29|DIGITAL ACQUISITION OF A TRACK MANUSCRIPT
EP2827282A1|2015-01-21|Method for verifying the veracity of a finger or a palm
FR3059449A1|2018-06-01|METHOD FOR DETECTING FRAUD OF AN IRIS RECOGNITION SYSTEM
Patent family:
Publication number | Publication date
FR3065306B1|2019-04-05|
US10380408B2|2019-08-13|
US20180300528A1|2018-10-18|
EP3388976A1|2018-10-17|
CN108694378A|2018-10-23|
Cited references:
Publication number | Filing date | Publication date | Applicant | Patent title
EP2495697A1|2009-10-26|2012-09-05|Nec Corporation|Fake finger determination device and fake finger determination method|
WO2015091701A1|2013-12-19|2015-06-25|Morpho|Method of validation intended to validate that an element is covered by a true skin|
FR2757974B1|1996-12-27|1999-02-12|Sagem|OPTICAL FINGERPRINT SENSOR|
NL2012891B1|2013-06-05|2016-06-21|Apple Inc|Biometric sensor chip having distributed sensor and control circuitry.|
CN105989327A|2015-02-02|2016-10-05|神盾股份有限公司|Fingerprint sensing device and method|
KR102277453B1|2015-02-05|2021-07-14|삼성전자주식회사|Electronic device with touch sensor and driving method thereof|
FR3034224B1|2015-03-23|2018-03-23|Morpho|DEVICE FOR VERIFYING THE VERACITY OF A DIGITAL FOOTPRINT|
FR3045884B1|2015-12-18|2017-12-08|Morpho|METHOD FOR DETECTING FRAUD TO AUTHENTICATE A FINGER|
US10089514B1|2017-03-31|2018-10-02|Synaptics Incorporated|Adaptive reference for differential capacitive measurements|
US10616453B2|2017-01-11|2020-04-07|Nokia Technologies Oy|Audio and visual system including a mask functioning for a camera module and an audio transducer module|
FR3064112B1|2017-03-16|2021-06-18|Commissariat Energie Atomique|OPTICAL IMAGING DEVICE|
WO2020102945A1|2018-11-19|2020-05-28|深圳市汇顶科技股份有限公司|Fingerprint identification method, apparatus and electronic device|
Legal status:
2018-03-22| PLFP| Fee payment|Year of fee payment: 2 |
2018-10-19| PLSC| Publication of the preliminary search report|Effective date: 20181019 |
2020-03-19| PLFP| Fee payment|Year of fee payment: 4 |
2021-03-23| PLFP| Fee payment|Year of fee payment: 5 |
Priority:
Application number | Filing date | Patent title
FR1753180A|FR3065306B1|2017-04-12|2017-04-12|METHOD OF DETECTING FRAUD|
FR1753180|2017-04-12|
FR1753180A|FR3065306B1|2017-04-12|2017-04-12|METHOD OF DETECTING FRAUD|
US15/948,371| US10380408B2|2017-04-12|2018-04-09|Method of detecting fraud|
EP18166590.2A| EP3388976A1|2017-04-12|2018-04-10|Method for detecting fraud|
CN201810325629.1A| CN108694378A|2017-04-12|2018-04-12|The method for detecting fraud|